To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was performed using dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r, θ) domain was applied to the raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box around each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. To improve classification, we applied data augmentation (i.e., angle rotation) to bounding boxes containing a microvessel during network training. The data augmentation and pre-processing steps significantly improved microvessel segmentation performance, yielding a method with a Dice coefficient of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, a difference of 4.4%. Compared to the manual method, the automated method improved microvessel continuity, suggesting improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
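As a concrete illustration of the polar-domain augmentation described above, a rotation of the vessel cross-section corresponds to a circular shift of A-lines along the θ axis. The following is a minimal sketch of this idea; the function names, array layout (θ rows by r columns), and number of copies are illustrative assumptions, not the original implementation.

```python
# Minimal sketch of rotation augmentation in the polar (r, theta) domain:
# rotating the cross-section is equivalent to a circular shift of A-lines.
# Names and parameters are illustrative, not from the original code.
import numpy as np

def rotate_polar(frame: np.ndarray, shift_deg: float) -> np.ndarray:
    """Circularly shift a polar IVOCT frame (theta x r) by shift_deg degrees."""
    n_alines = frame.shape[0]                      # rows correspond to A-lines (theta)
    shift = int(round(shift_deg / 360.0 * n_alines))
    return np.roll(frame, shift, axis=0)           # wrap-around preserves all pixels

def augment(frame: np.ndarray, mask: np.ndarray, n_copies: int = 8):
    """Generate rotated copies of an image/label pair so that microvessels
    appear at many angles during training."""
    angles = np.random.uniform(0.0, 360.0, size=n_copies)
    return [(rotate_polar(frame, a), rotate_polar(mask, a)) for a in angles]
```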
Thin-cap fibroatheroma (TCFA) and plaque rupture have been recognized as the most frequent risk factors for thrombosis and acute coronary syndrome. Intravascular optical coherence tomography (IVOCT) can identify TCFA and assess cap thickness, providing an opportunity to evaluate plaque vulnerability. We developed an automated method that can detect lipidous plaque and assess fibrous cap thickness in IVOCT images. This study analyzed a total of 4,360 IVOCT image frames of 77 lesions among 41 patients. To improve segmentation performance, pre-processing included lumen segmentation, pixel shifting, and noise filtering on the raw polar (r, θ) IVOCT images. We used the DeepLab v3+ deep learning model to classify lipidous plaque pixels. After lipid detection, we automatically detected the outer border of the fibrous cap using a dedicated dynamic programming algorithm and assessed the cap thickness. Our method provided excellent discrimination of lipidous plaque, with a sensitivity of 85.8% and an A-line Dice coefficient of 0.837. Comparing lipid angle measurements between two analysts after editing of the automated results, we found good agreement by Bland-Altman analysis (difference 6.7+/-17 degrees; mean 196 degrees). Our method accurately detected the fibrous cap from the detected lipid plaque. Automated analysis required a significant modification for only 5.5% of frames. Furthermore, our method showed good agreement of fibrous cap thickness between the two analysts in Bland-Altman analysis (4.2+/-14.6 microns; mean 175 microns), indicating little bias between users and good reproducibility of the measurement. We developed a fully automated method for fibrous cap quantification in IVOCT images that agrees well with determinations by analysts. The method has great potential to enable highly automated, repeatable, and comprehensive evaluations of TCFAs.
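Once the lumen border and the outer border of the fibrous cap are available per A-line, cap thickness reduces to a radial distance in pixels scaled by the pixel spacing. The sketch below illustrates this measurement step only; the border representation, function names, and pixel spacing value are assumptions, not the original code.

```python
# Minimal sketch of per-A-line fibrous cap thickness measurement, assuming the
# lumen border and the cap outer border are given as radial pixel indices per
# A-line. The pixel spacing is an assumed, system-dependent value.
import numpy as np

PIXEL_SPACING_UM = 4.9   # assumed radial pixel size in microns

def cap_thickness_um(lumen_r: np.ndarray, cap_outer_r: np.ndarray) -> np.ndarray:
    """Cap thickness per A-line = radial distance between the lumen border and
    the outer border of the fibrous cap, in microns."""
    thickness_px = np.clip(cap_outer_r - lumen_r, a_min=0, a_max=None)
    return thickness_px * PIXEL_SPACING_UM

def min_cap_thickness_um(lumen_r, cap_outer_r, lipid_mask):
    """Minimum thickness over A-lines flagged as lipidous (e.g., for TCFA screening)."""
    t = cap_thickness_um(lumen_r, cap_outer_r)
    return float(t[lipid_mask].min()) if lipid_mask.any() else np.nan
```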
Distinguishing whether an action is performed as intended or whether an intended action fails is an important skill not only for humans but also for intelligent systems operating in human environments. However, recognizing whether an action is unintentional, or whether it will fail, is difficult due to the lack of annotated data. Although videos of unintentional or failed actions can be found on the internet, the high annotation cost is a major bottleneck for training networks. In this work, we therefore study the problem of self-supervised representation learning for unintentional action prediction. While prior works learn representations based on local temporal neighborhoods, we show that the global context of a video is needed to learn good representations for three downstream tasks: unintentional action classification, localization, and anticipation. In the supplementary material, we show that the learned representations can also be used for detecting anomalies in videos.
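To make the role of global context concrete, the sketch below (PyTorch) shows one simple way a per-clip feature could be combined with a video-level context vector before a downstream classification head. This is an illustrative probe under assumed names and dimensions, not the architecture from the paper.

```python
# Illustrative probe: concatenate each clip feature with a global (video-level)
# context vector before classification. Class set and sizes are assumptions.
import torch
import torch.nn as nn

class ContextProbe(nn.Module):
    def __init__(self, feat_dim: int = 512, n_classes: int = 3):
        super().__init__()
        # n_classes = 3 assumes e.g. intentional / transition / unintentional
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, clip_feats: torch.Tensor) -> torch.Tensor:
        """clip_feats: (T, D) self-supervised features for T clips of one video."""
        global_ctx = clip_feats.mean(dim=0, keepdim=True)       # (1, D) video-level context
        ctx = global_ctx.expand_as(clip_feats)                  # broadcast to every clip
        return self.head(torch.cat([clip_feats, ctx], dim=-1))  # per-clip class logits
```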
Automated and reliable processing of bubble flow images is in high demand for analyzing the large datasets produced by comprehensive experimental series. Particular difficulties arise from overlapping bubble projections in the recorded images, which greatly complicate the identification of individual bubbles. Recent approaches have focused on applying deep learning algorithms to this task and have demonstrated the high potential of such techniques. The main challenges are handling varying image conditions and higher gas volume fractions, as well as correctly reconstructing the hidden segments of partially occluded bubbles. In the present work, we address these points by testing two previous and two new approaches based on convolutional neural networks (CNNs) for the latter task. To validate our methodology, we created test datasets of synthetic images, which further demonstrate the capabilities and limitations of our combined approach. The generated data, code, and trained models are made accessible to foster further developments in the research field of bubble recognition in experimental images.
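A synthetic test set of the kind described above can be produced by drawing overlapping bubble projections while keeping a separate ground-truth mask per bubble, so that occluded segments remain known. The sketch below is a minimal illustration; the image size, bubble counts, and radii are arbitrary assumptions.

```python
# Minimal sketch of generating a synthetic frame with overlapping circular
# bubble projections and per-bubble ground-truth masks. Parameters are arbitrary.
import numpy as np

def synth_bubble_frame(h=256, w=256, n_bubbles=12, r_range=(8, 30), seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:h, 0:w]
    image = np.zeros((h, w), dtype=np.float32)
    masks = []
    for _ in range(n_bubbles):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        r = rng.uniform(*r_range)
        disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        image = np.maximum(image, disk.astype(np.float32))   # overlaps merge in the image
        masks.append(disk)                                    # full mask kept per bubble
    return image, np.stack(masks)                             # (H, W), (N, H, W)
```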
In recent years, authentication based on physiological signals has shown great promise owing to its inherent robustness against forgery. The electrocardiogram (ECG) signal is the most widely studied biosignal in this context and has received the most attention. Many studies have demonstrated that individuals can be identified with acceptable accuracy by analyzing their ECG signals. In this work, we present EDITH, a deep learning-based framework for ECG biometric authentication. Furthermore, we hypothesize and demonstrate that a Siamese architecture can be used in place of typical distance metrics to improve performance. We evaluated EDITH on four commonly used datasets, outperforming prior work while using fewer beats. EDITH performs competitively using just a single heartbeat (96-99.75% accuracy) and can be further enhanced by fusing multiple beats (100% accuracy with 3 to 6 beats). Moreover, the proposed Siamese architecture reduces the authentication equal error rate (EER) to 1.29%. A limited case study of EDITH with real-world experimental data also suggests its potential as a practical authentication system.
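The sketch below (PyTorch) illustrates the general idea of a Siamese verification setup for single-beat ECG: a shared encoder embeds each beat, and a small learned head scores the pair instead of relying on a fixed distance metric. Layer sizes, the beat length, and the head design are illustrative assumptions, not the EDITH architecture.

```python
# Illustrative Siamese verification setup for ECG beats; all sizes are assumptions.
import torch
import torch.nn as nn

class BeatEncoder(nn.Module):
    def __init__(self, beat_len: int = 256, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),
            nn.Linear(16 * 32, emb_dim),
        )

    def forward(self, beat):                      # beat: (B, 1, beat_len)
        return self.net(beat)                     # (B, emb_dim)

class SiameseVerifier(nn.Module):
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.encoder = BeatEncoder(emb_dim=emb_dim)   # shared weights for both inputs
        self.score = nn.Sequential(nn.Linear(2 * emb_dim, 32), nn.ReLU(),
                                   nn.Linear(32, 1))

    def forward(self, beat_a, beat_b):
        ea, eb = self.encoder(beat_a), self.encoder(beat_b)
        # learned pairwise score (higher = same identity), thresholded e.g. at the EER point
        return self.score(torch.cat([ea, eb], dim=-1)).squeeze(-1)
```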
Temporally locating and classifying action segments in long untrimmed videos is of particular interest to many applications like surveillance and robotics. While traditional approaches follow a two-step pipeline, by generating framewise probabilities and then feeding them to high-level temporal models, recent approaches use temporal convolutions to directly classify the video frames. In this paper, we introduce a multi-stage architecture for the temporal action segmentation task. Each stage features a set of dilated temporal convolutions to generate an initial prediction that is refined by the next one. This architecture is trained using a combination of a classification loss and a proposed smoothing loss that penalizes over-segmentation errors. Extensive evaluation shows the effectiveness of the proposed model in capturing long-range dependencies and recognizing action segments. Our model achieves state-of-the-art results on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities (GTEA), and the Breakfast dataset.
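The per-stage objective described above can be sketched as a frame-wise cross-entropy plus a truncated mean squared error over temporal differences of log-probabilities, which discourages over-segmentation. The threshold and weighting below are illustrative values, not taken from this abstract.

```python
# Sketch of a per-stage loss: frame-wise cross-entropy plus a truncated MSE
# smoothing term over adjacent log-probabilities. tau and lam are illustrative.
import torch
import torch.nn.functional as F

def stage_loss(logits, labels, tau: float = 4.0, lam: float = 0.15):
    """logits: (B, C, T) frame-wise class scores; labels: (B, T) class indices."""
    ce = F.cross_entropy(logits, labels)

    log_p = F.log_softmax(logits, dim=1)                       # (B, C, T)
    delta = log_p[:, :, 1:] - log_p[:, :, :-1].detach()        # frame-to-frame differences
    tmse = torch.clamp(delta.abs(), max=tau) ** 2              # truncate to limit large jumps
    return ce + lam * tmse.mean()

# The total training loss sums stage_loss over the predictions of every refinement stage.
```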